Results 1 - 20 of 1,542
1.
bioRxiv ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38559114

ABSTRACT

Group-level analyses have typically associated behavioral signatures with a constrained set of brain areas. Here we show that two behavioral metrics - reaction time (RT) and confidence - can be decoded across the cortex when each individual is considered separately. Subjects (N=50) completed a perceptual decision-making task with confidence. We built models decoding trial-level RT and confidence separately for each subject using the activation patterns in one brain area at a time after splitting the entire cortex into 200 regions of interest (ROIs). At the group level, we replicated previous results by showing that both RT and confidence could be decoded from a small number of ROIs (12.0% and 3.5%, respectively). Critically, at the level of the individual, both RT and confidence could be decoded from most brain regions even after Bonferroni correction (90.0% and 72.5%, respectively). Surprisingly, we observed that many brain regions exhibited opposite brain-behavior relationships across individuals, such that, for example, higher activations predicted fast RTs in some subjects but slow RTs in others. These results were further replicated in a second dataset. Lastly, we developed a simple test to determine the robustness of decoding performance, which showed that several hundred trials per subject are required for robust decoding. These results show that behavioral signatures can be decoded from a much broader range of cortical areas than previously recognized and suggest the need to study the brain-behavior relationship at both the group and the individual level.
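The per-subject, per-ROI decoding scheme described above can be sketched as follows. This is a hypothetical minimal re-implementation (cross-validated ridge regression on synthetic data), not the authors' pipeline; in the study, one such model would be fit for each of the 200 ROIs and each subject, with significance assessed at a Bonferroni-corrected threshold.

```python
import numpy as np

def decode_roi(X, y, n_folds=5, alpha=1.0):
    """Cross-validated ridge decoding of a behavioral metric (e.g. trial RT)
    from one ROI's trial-by-voxel activation patterns; returns the
    out-of-fold prediction/truth correlation."""
    n = len(y)
    idx = np.arange(n)
    preds = np.empty(n)
    for f in np.array_split(idx, n_folds):
        tr = np.setdiff1d(idx, f)
        mu, sd = X[tr].mean(0), X[tr].std(0) + 1e-12
        Ztr = (X[tr] - mu) / sd
        w = np.linalg.solve(Ztr.T @ Ztr + alpha * np.eye(X.shape[1]),
                            Ztr.T @ (y[tr] - y[tr].mean()))
        preds[f] = ((X[f] - mu) / sd) @ w + y[tr].mean()
    return np.corrcoef(preds, y)[0, 1]

# Toy data: 300 trials, 20 "voxels", only the first voxel carries signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = X[:, 0] + 0.5 * rng.normal(size=300)
r = decode_roi(X, y)
# A per-ROI test would then compare r against a null distribution at a
# Bonferroni-corrected threshold (0.05 / 200 ROIs).
```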

2.
Sci Prog ; 107(2): 368504241232537, 2024.
Article in English | MEDLINE | ID: mdl-38567422

ABSTRACT

Nasopharyngeal carcinoma is a malignant tumor that arises in the epithelium and mucosal glands of the nasopharynx, and its pathological type is mostly poorly differentiated squamous cell carcinoma. Since the nasopharynx lies deep in the head and neck, early diagnosis and timely treatment are critical to patient survival. However, nasopharyngeal carcinoma tumors are small and vary widely in shape, so delineating tumor contours is a challenge even for experienced doctors. In addition, because of the special location of nasopharyngeal carcinoma, complex treatments such as radiotherapy or surgical resection are often required, so accurate pathological diagnosis is also very important for the selection of treatment options. However, current deep learning segmentation models face the problems of inaccurate segmentation and an unstable segmentation process, mainly limited by dataset accuracy, fuzzy boundaries, and complex contours. To address these two challenges, this article proposes a hybrid model, WET-UNet, based on the UNet network as a powerful alternative for nasopharyngeal cancer image segmentation. On the one hand, a wavelet transform is integrated into UNet to enhance lesion boundary information: low-frequency components adjust the encoder and optimize the subsequent computation of the Transformer, improving the accuracy and robustness of segmentation. On the other hand, the attention mechanism retains the most informative pixels in the image, captures long-range dependencies, and enables the network to learn more representative features, improving the recognition ability of the model. Comparative experiments show that our network outperforms other models for nasopharyngeal cancer image segmentation, and we demonstrate the effectiveness of the two added modules in aiding tumor segmentation.
The dataset comprises 5000 images, split 8:2 between training and validation. In our experiments, the model achieved an accuracy of 85.2% and a precision of 84.9%, demonstrating good performance in nasopharyngeal cancer image segmentation.
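The wavelet idea above rests on separating low- and high-frequency image content. A single-level 2D Haar decomposition illustrates this; the low-frequency LL band is the kind of component fed back to the encoder (an illustrative sketch only; the paper's transform details may differ):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet decomposition of a 2D array with
    even dimensions, returning the LL (approximation) band plus the
    three detail bands."""
    a = (img[0::2] + img[1::2]) / 2      # row-pair averages
    d = (img[0::2] - img[1::2]) / 2      # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2   # low-low: coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return LL, LH, HL, HH
```

On a constant image the detail bands vanish and LL reproduces the constant at half resolution, which is why LL is a natural low-frequency guide for an encoder.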


Subjects
Nasopharyngeal Neoplasms , Humans , Nasopharyngeal Neoplasms/diagnostic imaging , Nasopharyngeal Carcinoma/diagnostic imaging , Epithelium , Neck
3.
Brain Stimul ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38636820

ABSTRACT

BACKGROUND: Gait impairment has a major impact on quality of life in patients with Parkinson's disease (PD). It is believed that basal ganglia oscillatory activity at β frequencies (15-30 Hz) may contribute to gait impairment, but the precise dynamics of this oscillatory activity during gait remain unclear. Additionally, auditory cues are known to lead to improvements in gait kinematics in PD. If the neurophysiological mechanisms of this cueing effect were better understood, they could be leveraged to treat gait impairments using adaptive Deep Brain Stimulation (aDBS) technologies. OBJECTIVE: We aimed to characterize the dynamics of subthalamic nucleus (STN) oscillatory activity during stepping movements in PD and to establish the neurophysiological mechanisms by which auditory cues modulate gait. METHODS: We studied STN local field potentials (LFPs) in eight PD patients while they performed stepping movements. Hidden Markov Models (HMMs) were used to discover transient states of spectral activity that occurred during stepping with and without auditory cues. RESULTS: The occurrence of low- and high-β bursts was suppressed during and after auditory cues. This manifested as a decrease in their fractional occupancy and state lifetimes. Interestingly, α transients showed the opposite effect, with fractional occupancy and state lifetimes increasing during and after auditory cues. CONCLUSIONS: We show that STN oscillatory activity in the α and β frequency bands is differentially modulated by gait-promoting auditory cues. These findings suggest that the enhancement of α rhythms may be an approach for ameliorating gait impairments in PD.
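Fractional occupancy and state lifetimes can be illustrated with a simple amplitude-threshold burst detector; this is a stand-in for intuition only (the study itself derives these statistics from HMM states, not a fixed threshold):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def burst_metrics(lfp, fs, band=(15, 30), pct=75):
    """Fractional occupancy and lifetimes of supra-threshold bursts of a
    band-limited amplitude envelope (percentile threshold)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, lfp)))      # amplitude envelope
    # Pad with False so every burst has a matched start and end edge.
    on = np.concatenate(([False], env > np.percentile(env, pct), [False]))
    edges = np.diff(on.astype(int))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    frac_occ = on.mean()                             # share of time in a burst
    lifetimes = (ends - starts) / fs                 # burst durations (s)
    return frac_occ, lifetimes
```

With a 75th-percentile threshold, fractional occupancy sits near 0.25 by construction; cue-locked suppression would show up as a drop relative to that baseline in the cue window.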

4.
Front Hum Neurosci ; 18: 1305058, 2024.
Article in English | MEDLINE | ID: mdl-38646159

ABSTRACT

Introduction: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods: Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results: The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion: These results show that MASK provides the capability for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-registration with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system.

5.
J Neural Eng ; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38621379

ABSTRACT

Objective. This paper presents data-driven solutions to address two challenges in linking neural data and behavior: 1) unsupervised analysis of behavioral data and automatic label generation from behavioral observations, and 2) extraction of subject-invariant features for the development of generalized neural decoding models. Approach. For behavioral analysis and label generation, an unsupervised method, which employs an autoencoder to transform behavior data into a cluster-friendly feature space, is presented. The model iteratively refines the assigned clusters with a soft clustering assignment loss and gradually improves the learned feature representations. To address subject variability in decoding neural activity, adversarial learning in combination with a long short-term memory-based adversarial variational autoencoder (LSTM-AVAE) model is employed. By using an adversary network to constrain the latent representations, the model captures information shared across subjects' neural activity, making it suitable for cross-subject transfer learning. Main results. The proposed approach is evaluated using cortical recordings of Thy1-GCaMP6s transgenic mice obtained via widefield calcium imaging during a motivational licking behavioral experiment. The results show that the proposed model achieves an accuracy of 89.7% in cross-subject neural decoding, outperforming other well-known autoencoder-based feature learning models. These findings suggest that incorporating an adversary network eliminates subject dependency in the representations, leading to improved cross-subject transfer learning, while also demonstrating the effectiveness of LSTM-based models in capturing the temporal dependencies within neural data. Significance. The results demonstrate the feasibility of the proposed framework for unsupervised clustering and label generation of behavioral data, as well as for achieving high accuracy in cross-subject neural decoding, indicating its potential for relating neural activity to behavior.
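The "soft clustering assignment loss" above follows the general deep-embedded-clustering recipe: soft assignments to cluster centers plus a sharpened target distribution. The sketch below re-implements those two ingredients only (a hypothetical, DEC-style formulation; the paper's exact loss and autoencoder are not reproduced here):

```python
import numpy as np

def soft_assign(Z, centers, alpha=1.0):
    """Student's-t soft cluster assignments Q over a learned feature
    space Z (rows: samples) given current cluster centers."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened targets P; training minimizes KL(P || Q), gradually
    purifying the clusters over iterations."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```

Cluster labels for behavior then fall out as the argmax of Q once training converges.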

6.
Curr Biol ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38608677

ABSTRACT

Prefrontal (PFC) and hippocampal (HPC) sequences of neuronal firing modulated by theta rhythms could represent upcoming choices during spatial memory-guided decision-making. How the PFC-HPC network dynamically coordinates theta sequences to predict specific goal locations, and how this coordination is interrupted in memory impairments induced by amyloid beta (Aβ), remain unclear. Here, we detected theta sequences of firing activities of PFC neurons and HPC place cells during goal-directed spatial memory tasks. We found that PFC ensembles exhibited predictive representation of the specific goal location from the starting phase of memory retrieval, earlier than the hippocampus. High predictive accuracy of PFC theta sequences existed during successful memory retrieval and positively correlated with memory performance. Coordinated PFC-HPC sequences showed PFC-dominant prediction of goal locations during successful memory retrieval. Furthermore, we found that theta sequences of both regions still existed under Aβ accumulation, whereas their predictive representation of goal locations was weakened, with disrupted spatial representation of HPC place cells and PFC neurons. These findings highlight the essential role of coordinated PFC-HPC sequences in successful memory retrieval of a precise goal location.

7.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38629796

ABSTRACT

Neuroimaging studies have shown that the neural representation of imagery is closely related to the perception modality; however, the undeniably different experiences of perception and imagery indicate that there are clear differences between their neural mechanisms, which cannot be explained by the simple theory that imagery is a weak form of perception. Considering the importance of functional integration of brain regions in neural activity, we conducted a correlation analysis of neural activity in brain regions jointly activated by auditory imagery and perception, obtaining brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher during auditory perception than during imagery. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception are significantly distinguishable. Subsequently, voxel-level FC analysis further verified the regions containing voxels with significant connectivity differences between the two modalities. This study clarifies both the correspondence and the differences between auditory imagery and perception in terms of brain information interaction, and it provides a new perspective for investigating the neural mechanisms of different modal information representations.
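The FC analysis above reduces, at its core, to Pearson correlations between region time series and edge-wise comparisons between conditions. A minimal sketch, assuming time-by-ROI arrays (not the authors' code):

```python
import numpy as np

def fc_network(ts):
    """ROI-by-ROI functional connectivity as Pearson correlations of the
    time series (rows: time points, columns: ROIs)."""
    return np.corrcoef(ts, rowvar=False)

def fc_difference(ts_perception, ts_imagery):
    """Edge-wise FC difference between two modalities; large entries flag
    connections like the STG-precentral links reported to differ."""
    return fc_network(ts_perception) - fc_network(ts_imagery)
```

In practice each edge difference would be tested across subjects for significance rather than read off a single difference matrix.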


Subjects
Auditory Cortex , Brain Mapping , Brain Mapping/methods , Imagination , Brain/diagnostic imaging , Auditory Perception , Cerebral Cortex , Magnetic Resonance Imaging/methods , Auditory Cortex/diagnostic imaging
8.
IEEE J Transl Eng Health Med ; 12: 371-381, 2024.
Article in English | MEDLINE | ID: mdl-38633564

ABSTRACT

Brain state classification by applying deep learning techniques to neuroimaging data has become a recent topic of research. However, unlike domains where the data is low dimensional or there are large numbers of available training samples, neuroimaging data is high dimensional and has few training samples. To tackle these issues, we present a sparse feedforward deep neural architecture for encoding and decoding the structural connectome of the human brain. We use a sparsely connected element-wise multiplication as the first hidden layer and a fixed transform layer as the output layer. The number of trainable parameters and the training time are significantly reduced compared to feedforward networks. We demonstrate the superior performance of this architecture in encoding the structural connectome implicated in Alzheimer's disease (AD) and Parkinson's disease (PD) from DTI brain scans. For decoding, we propose a recursive feature elimination (RFE) algorithm, based on the DeepLIFT, layer-wise relevance propagation (LRP), and Integrated Gradients (IG) algorithms, to remove irrelevant features and thereby identify key biomarkers associated with AD and PD. We show that the proposed architecture reduces the trainable parameters by 45.1% and 47.1% compared to a feedforward DNN, with an increase in accuracy of 2.6% and 3.1% for cognitively normal (CN) vs AD and CN vs PD classification, respectively. We also show that the proposed RFE method leads to a further increase in accuracy of 2.1% and 4% for CN vs AD and CN vs PD classification, while removing approximately 90% to 95% of irrelevant features. Furthermore, the biomarkers (i.e., key brain regions and connections) identified are consistent with previous literature. We show that relevancy score-based methods can yield high discriminative power and are suitable for brain decoding. The proposed approach thus reduces the number of trainable network parameters, increases classification accuracy, and detects brain connections and regions consistent with earlier studies.
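The relevance-driven RFE loop above can be sketched generically: score each kept feature, drop the least relevant fraction, and refit. The sketch below substitutes ridge-regression weight magnitudes for the DeepLIFT/LRP/IG attributions used in the paper (an assumption made purely so the example is self-contained):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # Linear stand-in for the network; |weight| plays the role of a
    # DeepLIFT/LRP/IG relevance score in this sketch.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def relevance_rfe(X, y, frac_drop=0.2, n_keep=10):
    """Recursive feature elimination driven by per-feature relevance
    scores, mirroring the RFE over connectome features."""
    keep = np.arange(X.shape[1])
    while len(keep) > n_keep:
        rel = np.abs(ridge_fit(X[:, keep], y))
        n_drop = min(max(1, int(frac_drop * len(keep))), len(keep) - n_keep)
        keep = keep[np.argsort(rel)[n_drop:]]   # drop least-relevant features
    return np.sort(keep)
```

The surviving feature indices are the candidate "biomarker" connections; the same loop works with any attribution method that yields one score per feature.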


Subjects
Alzheimer Disease , Connectome , Humans , Magnetic Resonance Imaging/methods , Connectome/methods , Neural Networks, Computer , Neuroimaging/methods , Biomarkers
9.
Front Psychol ; 15: 1357590, 2024.
Article in English | MEDLINE | ID: mdl-38659686

ABSTRACT

Introduction: Reading comprehension is one of the most important skills learned in school and it has an important contribution to the academic success of children with Autism Spectrum Disorder (ASD). Though previous studies have investigated reading comprehension difficulties in ASD and highlighted factors that contribute to these difficulties, this evidence has mainly stemmed from children with ASD and intact cognitive skills. Also, much emphasis has been placed on the relation between reading comprehension and word recognition skills, while the role of other skills, including fluency and morphosyntax, remains underexplored. This study addresses these gaps by investigating reading comprehension in two groups of school-aged children with ASD, one with intact and one with low cognitive abilities, also exploring the roles of word decoding, fluency and morphosyntax in each group's reading comprehension performance. Methods: The study recruited 16 children with ASD and low cognitive abilities, and 22 age-matched children with ASD and intact cognitive skills. The children were assessed on four reading subdomains, namely, decoding, fluency, morphosyntax, and reading comprehension. Results: The children with ASD and low cognitive abilities scored significantly lower than their peers with intact cognitive abilities in all reading subdomains, except for decoding, verb production and compound word formation. Regression analyses showed that reading comprehension in the group with ASD and intact cognitive abilities was independently driven by their decoding and fluency skills, and to a lesser extent, by morphosyntax. On the other hand, the children with ASD and low cognitive abilities mainly drew on their decoding, and to a lesser extent, their morphosyntactic skills to perform in reading comprehension. 
Discussion: The results suggest that reading comprehension was more strongly affected in the children with ASD and low cognitive abilities as compared to those with intact cognitive skills. About half of the children with ASD and intact cognitive skills also exhibited mild-to-moderate reading comprehension difficulties, further implying that ASD may influence reading comprehension regardless of cognitive functioning. Finally, strengths in decoding seemed to predominantly drive the cognitively impaired children's reading performance, while the group with ASD and intact cognitive skills mainly recruited fluency and metalinguistic lexical skills to cope with reading comprehension demands, further suggesting that metalinguistic awareness may be a viable way to enhance reading comprehension in ASD.

10.
J Neural Eng ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648781

ABSTRACT

OBJECTIVE: Invasive brain-computer interfaces (BCIs) are promising communication devices for severely paralyzed patients. Recent advances in intracranial electroencephalography (iEEG) coupled with natural language processing have enhanced communication speed and accuracy. It should be noted that such speech BCIs use signals from the motor cortex. However, BCIs based on motor cortical activity may experience signal deterioration in users with motor cortical degenerative diseases such as amyotrophic lateral sclerosis (ALS). An alternative to motor cortex iEEG is necessary to support patients with such conditions. Approach: In this study, a multimodal embedding of text and images was used to decode visual semantic information from iEEG signals of the visual cortex to generate text and images. We used contrastive language-image pretraining (CLIP) embeddings to represent images presented to 17 patients implanted with electrodes in the occipital and temporal cortices. A CLIP image vector was inferred from the high-γ power of the iEEG signals recorded while viewing the images. Main results: Text was generated by CLIPCap from the inferred CLIP vector with better-than-chance accuracy. Then, an image was created from the generated text using Stable Diffusion with significant accuracy. Significance: The text and images generated from iEEG through the CLIP embedding vector can be used for improved communication.
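The core step of inferring an embedding vector from neural power features is typically a regularized linear map, followed by similarity-based retrieval in the embedding space. The sketch below abstracts CLIP as "some fixed target embedding" and uses ridge regression plus cosine retrieval (a hypothetical simplification; the paper's actual decoder may differ):

```python
import numpy as np

def fit_embedding_map(X, Y, lam=10.0):
    """Ridge map W from trial-wise high-gamma features X to target image
    embeddings Y, so that x @ W approximates the image's embedding."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def retrieve(x, W, gallery):
    """Index of the gallery embedding most cosine-similar to the inferred
    vector -- the step preceding caption/image generation."""
    v = x @ W
    sims = gallery @ v / (np.linalg.norm(gallery, axis=1)
                          * np.linalg.norm(v) + 1e-12)
    return int(np.argmax(sims))
```

In the paper's pipeline the inferred vector would instead be handed to CLIPCap for text generation rather than matched against a fixed gallery.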

11.
J Neural Eng ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648783

ABSTRACT

OBJECTIVE: Our goal is to decode firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to characterize the number of thalamic neurons necessary for high-accuracy decoding. Approach. We intraoperatively recorded single-neuron activity in the left Vim of 8 neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning during production, perception, and imagery of the five monophthongal vowel sounds. We utilized the Spade decoder, a machine learning algorithm that dynamically learns specific features of firing patterns and is based on sparse decomposition of the high-dimensional feature space. Main results. Spade outperformed all comparison algorithms for all three aspects of speech: production, perception, and imagery, obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%) based on pooling neurons across all patients. The accuracy was logarithmic in the number of neurons for all three aspects of speech. Regardless of the number of units employed, production yielded the highest accuracies, whereas perception and imagery were comparable. Significance. Our research renders single-neuron activity in the left Vim a promising source of inputs to BMIs for the restoration of speech faculties in locked-in patients or patients with anarthria or dysarthria, allowing them to communicate again. Our characterization of how many neurons are necessary to achieve a given decoding accuracy is of utmost importance for planning BMI implantation.

12.
J Neural Eng ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648782

ABSTRACT

PURPOSE: Brain-Computer Interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics for a successful speech implant are largely unknown. We address this topic in a high-field blood oxygenation level dependent (BOLD) functional Magnetic Resonance Imaging (fMRI) study, by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Methods: Twelve subjects completed a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass Support Vector Machine (SVM). Results: Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM-searchlight analysis revealed significant classification in the superior temporal gyrus in addition to the SMC. Conclusion: The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.

13.
Front Microbiol ; 15: 1369760, 2024.
Article in English | MEDLINE | ID: mdl-38500588

ABSTRACT

Ribosomes stall on truncated or otherwise damaged mRNAs. Bacteria rely on ribosome rescue mechanisms to replenish the pool of ribosomes available for translation. Trans-translation, the main ribosome-rescue pathway, uses a circular hybrid transfer-messenger RNA (tmRNA) to restart translation and label the resulting peptide for degradation. Previous studies have visualized how tmRNA and its helper protein SmpB interact with the stalled ribosome to establish a new open reading frame. As tmRNA presents the first alanine codon via a non-canonical mRNA path in the ribosome, the incoming alanyl-tRNA must rearrange the tmRNA molecule to read the codon. Here, we describe cryo-EM analyses of an endogenous Escherichia coli ribosome-tmRNA complex with tRNA(Ala) accommodated in the A site. The flexible adenosine-rich tmRNA linker, which connects the mRNA-like domain with the codon, is stabilized by the minor groove of the canonically positioned anticodon stem of tRNA(Ala). This ribosome complex can also accommodate a tRNA near the E (exit) site, bringing insights into the translocation and dissociation of the tRNA that decoded the defective mRNA prior to tmRNA binding. Together, these structures uncover a key step of ribosome rescue, in which the ribosome starts translating the tmRNA reading frame.

14.
J Neural Eng ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38513289

ABSTRACT

OBJECTIVE: The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments for example can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data, and as such our objective in this work is to develop a method to meet this need. APPROACH: Here we address this challenge by developing the multimodal event detector (MED) algorithm which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. MAIN RESULTS: We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded from a macaque monkey performing an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. 
SIGNIFICANCE: This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic setups without constrained tasks or prior knowledge of event times.
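The joint likelihood idea above can be sketched as a grid search over candidate onsets: a Gaussian term for the LFP (template at unknown delay, noise elsewhere) plus a Poisson term for the spikes (rate step during the event), weighted by a cross-modal scaling parameter. This is a sketch of the idea under simplifying assumptions (known template, single event class), not the published MED implementation:

```python
import numpy as np

def multimodal_event_time(lfp, spikes, template, rate_on, rate_off,
                          sigma=1.0, eta=1.0):
    """Maximum-likelihood event onset for joint Gaussian + point-process
    observations; eta plays the role of the cross-modal scaling parameter."""
    T, L = len(lfp), len(template)
    best_t, best_ll = 0, -np.inf
    for t in range(T - L + 1):
        resid = lfp.copy()
        resid[t:t + L] -= template            # Gaussian part: template at delay t
        ll_gauss = -0.5 * np.sum(resid ** 2) / sigma ** 2
        lam = np.full(T, rate_off)
        lam[t:t + L] = rate_on                # point-process part: rate step
        ll_point = np.sum(spikes * np.log(lam) - lam)
        ll = ll_gauss + eta * ll_point
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

# Simulated trial: event at bin 80 of 200.
rng = np.random.default_rng(0)
T, L, t_true = 200, 20, 80
template = 2.0 * np.hanning(L)
lfp = 0.3 * rng.normal(size=T)
lfp[t_true:t_true + L] += template
lam_true = np.full(T, 0.1)
lam_true[t_true:t_true + L] = 2.0
spikes = rng.poisson(lam_true)
t_hat = multimodal_event_time(lfp, spikes, template, 2.0, 0.1)
```

With `eta = 0` the estimator reduces to a classical matched filter on the LFP alone; combining both terms is what lets the multimodal estimator outperform either modality by itself.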

15.
Front Neurosci ; 18: 1345308, 2024.
Article in English | MEDLINE | ID: mdl-38486966

ABSTRACT

Introduction: Language impairments often result from severe neurological disorders, driving the development of neural prosthetics utilizing electrophysiological signals to restore comprehensible language. Previous decoding efforts primarily focused on signals from the cerebral cortex, neglecting subcortical brain structures' potential contributions to speech decoding in brain-computer interfaces. Methods: In this study, stereotactic electroencephalography (sEEG) was employed to investigate subcortical structures' role in speech decoding. Two native Mandarin Chinese speakers, undergoing sEEG implantation for epilepsy treatment, participated. Participants read Chinese text, with 1-30, 30-70, and 70-150 Hz frequency band powers of sEEG signals extracted as key features. A deep learning model based on long short-term memory assessed the contribution of different brain structures to speech decoding, predicting consonant articulatory place, manner, and tone within single syllables. Results: Cortical signals excelled in articulatory place prediction (86.5% accuracy), while cortical and subcortical signals performed similarly for articulatory manner (51.5% vs. 51.7% accuracy). Subcortical signals provided superior tone prediction (58.3% accuracy). The superior temporal gyrus was consistently relevant in speech decoding for consonants and tone. Combining cortical and subcortical inputs yielded the highest prediction accuracy, especially for tone. Discussion: This study underscores the essential roles of both cortical and subcortical structures in different aspects of speech decoding.
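The band-power features named above (1-30, 30-70, 70-150 Hz) can be computed per channel with a Welch spectrum; a minimal sketch of that feature-extraction step (the study's windowing and model are not reproduced):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"low": (1, 30), "mid": (30, 70), "high_gamma": (70, 150)}

def band_powers(x, fs):
    """Mean Welch power of one sEEG channel in the study's three
    feature bands."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    return {name: pxx[(f >= lo) & (f < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```

Stacking these per-channel band powers over sliding windows yields the feature sequence an LSTM-style decoder would consume.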

16.
J Neural Eng ; 21(2)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38471169

ABSTRACT

Objective. Chronic motor impairments of the arms and hands as a consequence of a cervical spinal cord injury (SCI) have a tremendous impact on activities of daily life. A considerable number of people, however, retain minimal voluntary motor control in the paralyzed parts of the upper limbs that is measurable by electromyography (EMG) and inertial measurement units (IMUs). Integration into human-machine interfaces (HMIs) holds promise for reliable grasp intent detection and intuitive assistive device control. Approach. We used a multimodal HMI incorporating EMG and IMU data to decode reach-and-grasp movements of groups of persons with cervical SCI (n = 4) and without (control, n = 13). A post-hoc evaluation of control group data aimed to identify optimal parameters for online, co-adaptive closed-loop HMI sessions with persons with cervical SCI. We compared the performance of real-time, Random Forest-based movement versus rest (2 classes) and grasp type predictors (3 classes) with respect to their co-adaptation and evaluated the underlying feature importance maps. Main results. Our multimodal approach enabled grasp decoding significantly better than EMG or IMU data alone (p < 0.05). We found the 0.25 s directly prior to the first touch of an object to hold the most discriminative information. Our HMIs correctly predicted 79.3 ± STD 7.4 (102.7 ± STD 2.3 control group) out of 105 trials, with grand average movement vs. rest prediction accuracies above 99.64% (100% sensitivity) and grasp prediction accuracies of 75.39 ± STD 13.77% (97.66 ± STD 5.48% control group). Co-adaptation led to higher prediction accuracies over time, and we could identify adaptations in feature importances unique to each participant with cervical SCI. Significance. Our findings foster the development of multimodal and adaptive HMIs that allow persons with cervical SCI intuitive control of assistive devices to improve personal independence.
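Extracting features from the 0.25 s window before first object touch, the interval reported as most discriminative, can be sketched as follows. The choice of EMG RMS and IMU means here is a simplification for illustration; the study's full feature set is richer:

```python
import numpy as np

def pre_touch_features(emg, imu, fs, touch_idx, win_s=0.25):
    """Feature vector from the pre-touch window: per-channel EMG RMS
    concatenated with per-channel IMU means. emg/imu are (samples,
    channels) arrays sampled at fs; touch_idx marks first object touch."""
    n = int(win_s * fs)
    e = emg[touch_idx - n:touch_idx]            # (n, emg_channels)
    m = imu[touch_idx - n:touch_idx]            # (n, imu_channels)
    return np.concatenate([np.sqrt((e ** 2).mean(axis=0)), m.mean(axis=0)])
```

One such vector per trial is what a Random Forest movement/grasp classifier would be trained on.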


Subjects
Cervical Cord , Spinal Cord Injuries , Humans , Electromyography/methods , Hand , Arm , Hand Strength
17.
Med Image Anal ; 94: 103136, 2024 May.
Article in English | MEDLINE | ID: mdl-38489895

ABSTRACT

Decoding brain states under different cognitive tasks from functional magnetic resonance imaging (fMRI) data has attracted great attention in the neuroimaging field. However, the well-known temporal dependency in fMRI sequences has not been fully exploited in existing studies, due to the limited temporal-modeling capacity of the backbone machine learning algorithms and the rigid training-sample organization strategies upon which brain decoding methods are built. To address these limitations, we propose a novel method for fine-grained brain state decoding, namely, the group deep bidirectional recurrent neural network (Group-DBRNN) model. We first propose a training-sample organization strategy that consists of a group-task sample generation module and a multiple-scale random fragment strategy (MRFS) module to collect training samples that contain rich task-relevant brain activity contrast (i.e., the comparison of neural activity patterns between different tasks) and maintain the temporal dependency. We then develop a novel decoding model by replacing the unidirectional RNNs that are widely used in existing brain state decoding studies with bidirectional stacked RNNs to better capture the temporal dependency, and by introducing a multi-task interaction layer (MTIL) module to effectively model the task-relevant brain activity contrast. Our experimental results on the Human Connectome Project task fMRI dataset (7 tasks consisting of 23 task sub-type states) show that the proposed model achieves an average decoding accuracy of 94.7% over the 23 fine-grained sub-type states. Meanwhile, our extensive interpretations of the intermediate features learned in the proposed model, via visualizations and quantitative assessments of their discriminability and inter-subject alignment, provide evidence that the proposed model can effectively capture the temporal dependency and task-relevant contrast.
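The fragment-sampling idea behind MRFS, cutting contiguous fragments of varying lengths so samples keep their temporal dependency, can be sketched in a few lines (a sketch in the spirit of the paper's module, not its exact sampling rules):

```python
import numpy as np

def random_fragments(run, n_frags, scales, rng):
    """Sample contiguous fragments of randomly chosen lengths (scales)
    from one fMRI run (rows: time points, columns: ROIs/voxels)."""
    T = run.shape[0]
    frags = []
    for _ in range(n_frags):
        L = int(rng.choice(scales))
        start = int(rng.integers(0, T - L + 1))
        frags.append(run[start:start + L])    # temporal order preserved
    return frags
```

Each fragment then serves as one training sequence for the bidirectional RNN, so the model sees task dynamics at multiple temporal scales.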


Subjects
Brain , Connectome , Humans , Brain/diagnostic imaging , Neural Networks, Computer , Connectome/methods , Algorithms , Magnetic Resonance Imaging/methods
18.
Comput Biol Med ; 173: 108384, 2024 May.
Article in English | MEDLINE | ID: mdl-38554657

ABSTRACT

Reliable prediction of multi-finger forces is crucial for neural-machine interfaces. Various neural decoding methods have progressed substantially toward accurate motor output predictions. However, most neural decoding is performed in a supervised manner, i.e., the finger forces are needed for model training, which may not be feasible in certain contexts, especially for individuals with an arm amputation. To address this issue, we developed an unsupervised neural decoding approach to predict multi-finger forces using spinal motoneuron firing information. We acquired high-density surface electromyogram (sEMG) signals of the finger extensor muscle while subjects performed single-finger and multi-finger isometric extension tasks. We first extracted motor units (MUs) from sEMG signals of the single-finger tasks. Because of inevitable finger muscle co-activation, MUs controlling the non-targeted fingers can also be recruited; to ensure accurate finger force prediction, these MUs need to be teased out. To this end, we clustered the decomposed MUs based on inter-MU distances measured by the dynamic time warping technique, and we then labeled the MUs using the mean firing rate or the firing rate phase amplitude. We merged the clustered MUs related to the same target finger and assigned weights based on the consistency of the retained MUs. As a result, compared with the supervised neural decoding approach and the conventional sEMG amplitude approach, our new approach achieved a higher R2 (0.77 ± 0.036 vs. 0.71 ± 0.11 vs. 0.61 ± 0.09) and a lower root mean square error (5.16 ± 0.58 %MVC vs. 5.88 ± 1.34 %MVC vs. 7.56 ± 1.60 %MVC). Our findings can pave the way for the development of accurate and robust neural-machine interfaces, which can significantly enhance the experience during human-robotic hand interactions in diverse contexts.
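The key unsupervised step, clustering motor units by dynamic time warping (DTW) distances between their firing-rate traces, can be sketched as below. This is a toy illustration under stated assumptions: the four synthetic traces, the average-linkage clustering, and the two-cluster cut are all hypothetical choices, not the authors' pipeline or labeling scheme.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D firing-rate traces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy smoothed firing-rate traces for 4 hypothetical motor units:
# two follow one force profile, two follow a different one.
t = np.linspace(0, 1, 50)
rates = [np.sin(np.pi * t), 1.1 * np.sin(np.pi * t),
         np.sin(2 * np.pi * t) ** 2, 0.9 * np.sin(2 * np.pi * t) ** 2]

# Pairwise DTW distance matrix.
n = len(rates)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(rates[i], rates[j])

# Hierarchical clustering on the condensed distance matrix; cut into 2 clusters.
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
print(labels)
```

MUs with similar firing-rate profiles land in the same cluster, which is the property the abstract relies on to separate target-finger MUs from co-activated ones.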


Subjects
Fingers , Hand , Humans , Fingers/physiology , Muscle, Skeletal/physiology , Electromyography/methods , Motor Neurons/physiology
19.
J Psycholinguist Res ; 53(2): 27, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38470546

ABSTRACT

This study aimed to examine the validity of the "simple view of reading" (SVR) model in the diglossic Arabic language. Using a longitudinal design, we tested whether decoding and listening comprehension (LC) in kindergarten can later predict reading comprehension (RC) in the first grade, and whether the contribution of LC to RC differs between the spoken and literary varieties of Arabic. The participants were 261 kindergartners who were followed into the first grade. Our results from separate SEM analyses for the spoken and literary varieties revealed some similarity between the explained variance in the spoken (52%) and literary (48%) variety models. However, while the contribution of LC to RC was higher than the contribution of decoding in the spoken variety model, the opposite pattern was observed in the literary variety model. The results are discussed in light of the diglossia phenomenon and its impact on comprehension skills in Arabic, with theoretical and pedagogical implications.
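The SVR posits that reading comprehension is the product of decoding and listening comprehension (RC = D × LC), so an additive regression should fit worse than one that includes the D × LC interaction. The toy simulation below illustrates that comparison; all data are synthetic (only the sample size of 261 is taken from the study), and this is a plain least-squares sketch, not the authors' SEM analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 261  # sample size matching the study's kindergarten cohort
decoding = rng.normal(size=n)
lc = rng.normal(size=n)
# SVR posits reading comprehension as the product of decoding and LC (plus noise).
rc = decoding * lc + 0.1 * rng.normal(size=n)

# Design matrices: additive model vs. model with the SVR interaction term.
X_add = np.column_stack([np.ones(n), decoding, lc])
X_mul = np.column_stack([np.ones(n), decoding, lc, decoding * lc])

def r2(X, y):
    """Ordinary least-squares R^2 for design matrix X predicting y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"additive R^2: {r2(X_add, rc):.2f}, "
      f"with interaction R^2: {r2(X_mul, rc):.2f}")
```

When RC is generated multiplicatively, the interaction model recovers nearly all the variance while the additive model recovers almost none, which is the pattern the SVR predicts.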


Subjects
Comprehension , Reading , Child , Child, Preschool , Humans , Language , Schools
20.
Neurosci Biobehav Rev ; 160: 105607, 2024 May.
Article in English | MEDLINE | ID: mdl-38428473

ABSTRACT

Risk-taking is a common, complex, and multidimensional behavioral construct that has significant implications for human health and well-being. Previous research has identified the neural mechanisms underlying risk-taking behavior in both adolescents and adults, yet the differences between adolescents' and adults' risk-taking in the brain remain elusive. This study first employs a comprehensive meta-analysis approach that includes 73 adult and 20 adolescent whole-brain experiments, incorporating observations from 1986 adults and 789 adolescents obtained from online databases, including Web of Science, PubMed, ScienceDirect, Google Scholar, and Neurosynth. It then combines functional decoding methods to identify common and distinct brain regions, and the corresponding psychological processes, associated with risk-taking behavior in these two cohorts. The results indicated that the neural bases underlying risk-taking behavior in both age groups are situated within the cognitive control, reward, and sensory networks. Subsequent contrast analysis revealed that both adolescents' and adults' risk-taking engaged the frontal pole within the fronto-parietal control network (FPN), but the former recruited more ventrolateral areas and the latter more dorsolateral areas. Moreover, adolescents' risk-taking evoked greater activity within the ventral attention network (VAN) and the default mode network (DMN) than adults' did, consistent with the functional decoding analyses. These findings provide new insights into the similarities and disparities of the risk-taking neural substrates underlying different age cohorts, supporting future neuroimaging research on the dynamic changes of risk-taking.


Subjects
Brain , Magnetic Resonance Imaging , Adult , Humans , Adolescent , Brain/diagnostic imaging , Brain/physiology , Frontal Lobe , Brain Mapping , Neuroimaging , Risk-Taking